Risk-Averse Receding Horizon Motion Planning
This paper studies the problem of risk-averse receding horizon motion
planning for agents with uncertain dynamics, in the presence of stochastic,
dynamic obstacles. We propose a model predictive control (MPC) scheme that
formulates the obstacle avoidance constraint using coherent risk measures. To
handle disturbances, or process noise, in the state dynamics, the state
constraints are tightened in a risk-aware manner to provide a disturbance
feedback policy. We also propose a waypoint following algorithm that uses the
proposed MPC scheme for discrete distributions and prove its risk-sensitive
recursive feasibility while guaranteeing finite-time task completion. We
further investigate some commonly used coherent risk metrics, namely,
conditional value-at-risk (CVaR), entropic value-at-risk (EVaR), and g-entropic
risk measures, and propose a tractable incorporation within MPC. We illustrate
our framework via simulation studies.
Comment: Submitted to Artificial Intelligence Journal, Special Issue on Risk-aware Autonomous Systems: Theory and Practice. arXiv admin note: text overlap with arXiv:2011.1121
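Conditional value-at-risk, one of the coherent risk measures the abstract names, has a simple sample-based form: the average loss over the worst alpha-fraction of outcomes. The sketch below is illustrative only (the function name and the alpha convention are my own choices, not taken from the paper):

```python
import numpy as np

def cvar(losses, alpha):
    """Sample-based CVaR: mean loss over the worst alpha-fraction of
    outcomes (alpha in (0, 1]; smaller alpha = more risk-averse).
    alpha = 1 recovers the plain expectation."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(alpha * len(losses))))            # worst k samples
    return losses[:k].mean()

samples = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
print(cvar(samples, 1.0))   # plain mean: 4.5
print(cvar(samples, 0.2))   # mean of the worst 20%: (9 + 8) / 2 = 8.5
```

In a risk-averse MPC, a quantity like this (in its convex optimization form) would replace the expected collision cost in the obstacle avoidance constraint.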
Distributionally Robust Model Predictive Control with Total Variation Distance
This paper studies the problem of distributionally robust model predictive
control (MPC) using total variation distance ambiguity sets. For a
discrete-time linear system with additive disturbances, we provide a
conditional value-at-risk reformulation of the MPC optimization problem that is
distributionally robust in the expected cost and chance constraints. The
distributionally robust chance constraint is over-approximated as a tightened
chance constraint, wherein the tightening for each time step in the MPC can be
computed offline, hence reducing the computational burden. We conclude with
numerical experiments to support our results on the probabilistic guarantees
and computational efficiency.
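The offline tightening the abstract mentions has an intuitive special case for total variation distance: for an ambiguity ball of radius r around the nominal distribution P, the worst-case probability of a violation event A is min(P(A) + r, 1), so a distributionally robust chance constraint at level eps reduces to a nominal one at the tightened level eps - r. A minimal sketch of that reduction (function names and the bare check are my own; the paper's full reformulation also involves CVaR):

```python
import numpy as np

def dr_chance_ok(violation_flags, eps, tv_radius):
    """Check a distributionally robust chance constraint under a
    total-variation ambiguity ball: sup over Q with TV(Q, P) <= r of
    Q(violation) equals min(P(violation) + r, 1), so the DR constraint
    reduces to comparing the empirical violation probability against
    the tightened level eps - r (computed offline; infeasible if r >= eps)."""
    p_hat = np.mean(violation_flags)   # empirical P(violation)
    tightened = eps - tv_radius        # offline constraint tightening
    return bool(p_hat <= tightened)

# 5% empirical violations, 10% allowed, TV radius 0.03 -> tightened level 0.07
flags = [1] * 5 + [0] * 95
print(dr_chance_ok(flags, eps=0.10, tv_radius=0.03))  # True
```

Because the tightening eps - r depends only on the risk level and the ambiguity radius, it can indeed be precomputed for every MPC time step, which is the source of the computational savings claimed above.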
Risk-Sensitive Motion Planning using Entropic Value-at-Risk
We consider the problem of risk-sensitive motion planning in the presence of randomly moving obstacles. To this end, we adopt a model predictive control (MPC) scheme and pose the obstacle avoidance constraint in the MPC problem as a distributionally robust constraint with a KL divergence ambiguity set. This constraint is the dual representation of the Entropic Value-at-Risk (EVaR). Building upon this viewpoint, we propose an algorithm to follow waypoints and discuss its feasibility and completion in finite time. We compare the policies obtained using EVaR with those obtained using another common coherent risk measure, Conditional Value-at-Risk (CVaR), via numerical experiments for a 2D system. We also implement the waypoint following algorithm on a 3D quadcopter simulation.
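Beyond its dual (KL ambiguity set) representation used above, EVaR also has a one-dimensional primal form, EVaR_alpha(X) = inf over z > 0 of (1/z) log(E[exp(zX)] / alpha), which is easy to estimate from samples. A crude grid-search sketch (the grid bounds and function name are my own choices, not from the paper):

```python
import numpy as np

def evar(samples, alpha, z_grid=np.linspace(1e-3, 10.0, 2000)):
    """Sample-based Entropic Value-at-Risk via its primal form:
    EVaR_alpha(X) = inf_{z > 0} (1/z) * log( E[exp(z X)] / alpha ).
    The infimum is approximated here on a fixed grid of z values."""
    x = np.asarray(samples, dtype=float)
    vals = [np.log(np.mean(np.exp(z * x)) / alpha) / z for z in z_grid]
    return min(vals)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)
print(evar(x, 0.05))  # sits above the CVaR of the same samples at level 0.05
```

EVaR upper-bounds CVaR at the same confidence level, which is why the two risk measures can yield noticeably different (more vs. less conservative) avoidance policies in the comparison described above.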
Evaluating Performance of Hybrid Networks by Using Latency and PDV
Hybrid networks are widely used in the networking sector. They combine the best features of wired and wireless networks to give optimal results. Using different types of routing protocols, the capabilities of a hybrid network can be demonstrated with certain performance metrics. In this paper, we simulate real-time scenarios for three networks of different sizes. Each network is implemented with a single routing protocol, the Enhanced Interior Gateway Routing Protocol (EIGRP). The networks are simulated using the Cisco Packet Tracer simulation tool. We then evaluate the performance of the networks using metrics such as network latency and packet delay variation (PDV).
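The two metrics named above are straightforward to compute from per-packet delay measurements; one common definition of PDV is the variation between consecutive one-way delays (as in RFC 3393's inter-packet delay variation). A minimal sketch, assuming a hypothetical list of measured per-packet delays in milliseconds:

```python
def latency_and_pdv(delays_ms):
    """Mean latency and packet delay variation (PDV) from per-packet
    one-way delays. PDV here is the mean absolute difference between
    consecutive delays (one RFC 3393-style convention; others exist)."""
    mean_latency = sum(delays_ms) / len(delays_ms)
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    pdv = sum(diffs) / len(diffs)
    return mean_latency, pdv

# Hypothetical delays from a simulated capture (not values from the paper).
lat, pdv = latency_and_pdv([10.0, 12.0, 11.0, 15.0, 10.0])
print(lat, pdv)  # 11.6 ms mean latency; PDV = (2 + 1 + 4 + 5) / 4 = 3.0 ms
```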
Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners
Large language models (LLMs) exhibit a wide range of promising capabilities
-- from step-by-step planning to commonsense reasoning -- that may provide
utility for robots, but remain prone to confidently hallucinated predictions.
In this work, we present KnowNo, a framework for measuring and
aligning the uncertainty of LLM-based planners so that they know when they
don't know and ask for help when needed. KnowNo builds on the theory of
conformal prediction to provide statistical guarantees on task completion while
minimizing human help in complex multi-step planning settings. Experiments
across a variety of simulated and real robot setups that involve tasks with
different modes of ambiguity (e.g., from spatial to numeric uncertainties, from
human preferences to Winograd schemas) show that KnowNo performs favorably over
modern baselines (which may involve ensembles or extensive prompt tuning) in
terms of improving efficiency and autonomy, while providing formal assurances.
KnowNo can be used with LLMs out of the box without model-finetuning, and
suggests a promising lightweight approach to modeling uncertainty that can
complement and scale with the growing capabilities of foundation models.
Website: https://robot-help.github.io
Comment: Conference on Robot Learning (CoRL) 2023, Oral Presentation
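The conformal-prediction machinery behind this kind of ask-for-help trigger can be sketched in a few lines: calibrate a score quantile so that prediction sets cover the true option with the desired probability, then ask a human whenever the set for a new task contains more than one option. The function names, the nonconformity score (1 minus model confidence), and all numbers below are illustrative assumptions, not KnowNo's actual implementation:

```python
import numpy as np

def conformal_qhat(cal_scores, eps):
    """Split conformal prediction: from calibration nonconformity scores
    (e.g., 1 - model confidence in the true option), return the quantile
    qhat such that prediction sets cover the truth with prob >= 1 - eps."""
    n = len(cal_scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - eps)) / n)
    return np.quantile(cal_scores, q_level, method="higher")

def prediction_set(option_confidences, qhat):
    """Keep every option whose nonconformity 1 - confidence is <= qhat.
    If more than one option survives, the planner is uncertain and
    should ask a human for help."""
    return [i for i, c in enumerate(option_confidences) if 1 - c <= qhat]

# Hypothetical calibration scores and per-option confidences for a new task.
cal = np.array([0.1, 0.2, 0.15, 0.3, 0.25, 0.05, 0.4, 0.35, 0.12, 0.22])
qhat = conformal_qhat(cal, eps=0.2)
print(prediction_set([0.9, 0.7, 0.05, 0.02], qhat))   # two options survive: ask
print(prediction_set([0.99, 0.3, 0.1, 0.05], qhat))   # singleton set: act alone
```

The statistical guarantee comes entirely from the calibration step, which is why no model finetuning is needed: any black-box LLM confidence score can be calibrated this way.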